Improved worst-case evaluation complexity for potentially rank-deficient nonlinear least-Euclidean-norm problems using higher-order regularized models

Author

  • C. Cartis
Abstract

Given a sufficiently smooth vector-valued function r(x), a local minimizer of ‖r(x)‖_2 within a closed, non-empty, convex set F is sought by modelling ‖r(x)‖_2^q / q with a p-th order Taylor-series approximation plus a (p+1)-st order regularization term, for given even p and some appropriate associated q. The resulting algorithm is guaranteed to find a value x̄ for which ‖r(x̄)‖_2 ≤ ε_p or χ(x̄) ≤ ε_d, for some first-order criticality measure χ(x) of ‖r(x)‖_2 within F, using at most O(max{max(ε_d, χ_min)^(−(p+1)/p), max(ε_p, r_min)^(−1/2^i)}) evaluations of r(x) and its derivatives; here r_min and χ_min ≥ 0 are any lower bounds on ‖r(x)‖_2 and χ(x), respectively, and 2^i is the highest power of 2 that divides p. An improved bound is possible under a suitable full-rank assumption.
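The modelling idea in the abstract can be illustrated in a much-simplified form. The sketch below (an assumption-laden toy, not the paper's algorithm) uses only a first-order Taylor model of r with quadratic regularization, yet shows why regularization helps when the Jacobian is rank-deficient: the regularized model subproblem stays solvable.

```python
import numpy as np

def regularized_ls_step(r, J, x, sigma):
    """One illustrative regularized-model step for minimizing ||r(x)||_2.

    Minimizes the simple model
        m(s) = 0.5*||r(x) + J(x) s||^2 + 0.5*sigma*||s||^2
    exactly via its normal equations. This uses only a first-order
    Taylor model of r plus quadratic regularization -- a drastic
    simplification of the p-th order models described in the abstract.
    """
    rx, Jx = r(x), J(x)
    n = Jx.shape[1]
    # (J^T J + sigma*I) s = -J^T r : the sigma*I term keeps the system
    # solvable even where J(x) is rank-deficient.
    s = np.linalg.solve(Jx.T @ Jx + sigma * np.eye(n), -Jx.T @ rx)
    return x + s

# Toy residual r(x) = (x0 - 1, x0*x1); its Jacobian loses rank when x0 = 0.
r = lambda x: np.array([x[0] - 1.0, x[0] * x[1]])
J = lambda x: np.array([[1.0, 0.0], [x[1], x[0]]])

x = np.array([3.0, 2.0])
for _ in range(50):
    x = regularized_ls_step(r, J, x, sigma=1e-2)
print(np.linalg.norm(r(x)))  # residual norm, driven close to zero
```

This is a Levenberg–Marquardt-style iteration with fixed damping; the paper's scheme instead adapts the regularization weight and uses higher-order Taylor terms to obtain the stated worst-case evaluation bounds.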


Similar articles

On the Evaluation Complexity of Cubic Regularization Methods for Potentially Rank-Deficient Nonlinear Least-Squares Problems and Its Relevance to Constrained Nonlinear Optimization

We propose a new termination criterion suitable for potentially singular, zero or non-zero residual, least-squares problems, with which cubic regularization variants take at most O(ε^(−3/2)) residual- and Jacobian-evaluations to drive either the Euclidean norm of the residual or its gradient below ε; this is the best-known bound for potentially rank-deficient nonlinear least-squares problems. We then app...


Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

The worst-case evaluation complexity for smooth (possibly nonconvex) unconstrained optimization is considered. It is shown that, if one is willing to use derivatives of the objective function up to order p (for p ≥ 1) and to assume Lipschitz continuity of the p-th derivative, then an ε-approximate first-order critical point can be computed in at most O(ε^(−(p+1)/p)) evaluations of the problem's obj...


A Complete Worst-case Analysis of Kannan’s Shortest Lattice Vector Algorithm

Computing a shortest nonzero vector of a given Euclidean lattice and computing a closest lattice vector to a given target are pervasive problems in computer science, computational mathematics and communication theory. The classical algorithms for these tasks were invented by Ravi Kannan in 1983 and, though remarkably simple to establish, their complexity bounds have not been improved for almost...


On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems

It is shown that the steepest descent and Newton's methods for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and function evaluations arbitrarily close to O(ε^(−2)) to drive the norm of the gradient below ε. This shows that the upper bound of O(ε^(−2)) evaluations known for steepest descent is tight, and that Newton's method may be as slow a...


An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems

The affine rank minimization problem, which consists of finding a matrix of minimum rank subject to linear equality constraints, has been proposed in many areas of engineering and science. A specific rank minimization problem is the matrix completion problem, in which we wish to recover a (low-rank) data matrix from incomplete samples of its entries. A recent convex relaxation of the rank minim...
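The workhorse of proximal gradient methods for this convex relaxation is the singular value thresholding operator, the proximal map of the nuclear norm. A minimal sketch (the matrix `A` and threshold `tau` below are made-up illustrative values, not from the paper):

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the proximal operator of tau*||.||_*.

    Each proximal gradient iteration for nuclear-norm-regularized least
    squares applies this map to a gradient step; singular values below
    tau are set to zero, which is what produces low-rank solutions.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# A rank-2 matrix whose second singular value is small (below tau):
# thresholding removes it, leaving a rank-1 result.
A = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, 1.0]) \
    + 0.1 * np.outer([0.0, 1.0, 0.0], [1.0, 1.0, 0.0])
X = svt(A, tau=0.5)
print(np.linalg.matrix_rank(X))  # 1
```

The accelerated variant discussed in the abstract wraps this same prox step in a Nesterov-style momentum scheme; the thresholding subproblem itself is unchanged.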



Journal title:

Volume   Issue

Pages  -

Publication date: 2015